x86/mem_sharing: resolve mm-lock order violations when forking VMs with nested p2m
author    Tamas K Lengyel <tamas.lengyel@intel.com>
          Fri, 8 Jan 2021 10:51:36 +0000 (11:51 +0100)
committer Jan Beulich <jbeulich@suse.com>
          Fri, 8 Jan 2021 10:51:36 +0000 (11:51 +0100)
commit    3e6c560ea1931ff579fc2c9504a6e46e5c4572c9
tree      7dd9b928e5956f17bfe5c0cdc13cc9465f3159c4
parent    762c3890c89f70687fad1130e61ac99e671b2a6d
x86/mem_sharing: resolve mm-lock order violations when forking VMs with nested p2m

Several lock-order violations have been encountered while attempting to fork
VMs with nestedhvm=1 set. This patch resolves them.

The order violations stem from a call to p2m_flush_nestedp2m being performed
whenever the hostp2m changes. This function always takes the p2m lock for each
nested_p2m. However, with mem_sharing the p2m locks must always be taken before
the sharing lock. To resolve this we avoid taking the sharing lock where
possible (it was actually unnecessary to begin with), make p2m_flush_nestedp2m
aware that the p2m lock may already be held, and preemptively take all
nested_p2m locks before unsharing a page where taking the sharing lock is
necessary.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
xen/arch/x86/mm/mem_sharing.c
xen/arch/x86/mm/p2m.c